45 research outputs found

    Quaternion Graph Neural Networks

    Recently, graph neural networks (GNNs) have become a principal research direction for learning low-dimensional continuous embeddings of nodes and graphs to predict node and graph labels, respectively. However, Euclidean embeddings have high distortion when GNNs are used to model complex graphs such as social networks. Furthermore, existing GNNs are not parameter-efficient: the number of model parameters grows quickly as more hidden layers are added. Therefore, we move beyond the Euclidean space to a hyper-complex vector space to improve graph representation quality and reduce the number of model parameters. To this end, we propose Quaternion Graph Neural Networks (QGNN) to generalize GCNs within the Quaternion space to learn quaternion embeddings for nodes and graphs. The Quaternion space, a hyper-complex vector space, provides highly meaningful computations through the Hamilton product compared to the Euclidean and complex vector spaces. As a result, our QGNN can reduce the model size by up to four times and learn better graph representations. Experimental results show that the proposed QGNN produces state-of-the-art accuracies on a range of well-known benchmark datasets for three downstream tasks: graph classification, semi-supervised node classification, and text (node) classification. Our code is available at: https://github.com/daiquocnguyen/QGNN
    Comment: The extended abstract has been accepted to the NeurIPS 2020 Workshop on Differential Geometry meets Deep Learning (DiffGeo4DL). The code in PyTorch and TensorFlow is available at: https://github.com/daiquocnguyen/QGNN
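
    The Hamilton product mentioned in the abstract is what drives the claimed parameter saving. Below is a minimal sketch, not the authors' implementation: the input features are split into four quaternion components and combined with four shared real weight matrices, so a 4n-to-4m mapping needs 4nm parameters instead of the 16nm of an equivalent real dense layer. All names, shapes, and data here are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a quaternion linear transform
# via the Hamilton product, illustrating the ~4x parameter saving.
import numpy as np

def quaternion_linear(x, Wr, Wi, Wj, Wk):
    """Apply a quaternion-valued weight to quaternion-valued input features.

    x is split into its four components (r, i, j, k), each of size n; the
    four real weight matrices of shape (n, m) define one quaternion weight,
    so the layer maps 4n -> 4m features with 4*n*m parameters instead of
    the 16*n*m a real dense layer of the same size would need.
    """
    r, i, j, k = np.split(x, 4, axis=-1)
    # Hamilton product between the quaternion input and the quaternion weight.
    out_r = r @ Wr - i @ Wi - j @ Wj - k @ Wk
    out_i = r @ Wi + i @ Wr + j @ Wk - k @ Wj
    out_j = r @ Wj - i @ Wk + j @ Wr + k @ Wi
    out_k = r @ Wk + i @ Wj - j @ Wi + k @ Wr
    return np.concatenate([out_r, out_i, out_j, out_k], axis=-1)

# Example: 8 nodes with 4*16 = 64 input features mapped to 4*8 = 32 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64))
Wr, Wi, Wj, Wk = (rng.normal(size=(16, 8)) for _ in range(4))
print(quaternion_linear(x, Wr, Wi, Wj, Wk).shape)  # (8, 32)
```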

    Narrative structure analysis with education and training videos for e-learning

    This paper deals with the problem of structuralizing education and training videos for high-level semantics extraction and nonlinear media presentation in e-learning applications. Drawing guidance from production knowledge in instructional media, we propose six main narrative structures employed in education and training videos for both motivation and demonstration during learning and practical training. We devise a powerful audiovisual feature set, accompanied by a hierarchical decision tree-based classification system, to determine and discriminate between these structures. Based on a two-tiered hierarchical model, we demonstrate that we can achieve an accuracy of 84.7% on a comprehensive set of education and training video data.
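
    As a hedged illustration of a two-tiered, decision-tree-based classification scheme (not the paper's system), the sketch below trains a first-tier tree to separate coarse groups of narrative structures and per-group trees to discriminate the structures within each group. The features, labels, and grouping are placeholders.

```python
# Sketch of a two-tier decision-tree classifier; data and grouping are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))               # audiovisual feature vectors (synthetic)
fine_labels = rng.integers(0, 6, size=600)   # six narrative structures
coarse_labels = fine_labels // 3             # assumed grouping into two coarse classes

# Tier 1: coarse classifier over all samples.
tier1 = DecisionTreeClassifier(max_depth=4).fit(X, coarse_labels)

# Tier 2: one classifier per coarse group, trained only on that group's samples.
tier2 = {}
for g in np.unique(coarse_labels):
    mask = coarse_labels == g
    tier2[g] = DecisionTreeClassifier(max_depth=4).fit(X[mask], fine_labels[mask])

def predict(x):
    g = tier1.predict(x.reshape(1, -1))[0]
    return tier2[g].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```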

    Two-view Graph Neural Networks for Knowledge Graph Completion

    We present an effective GNN-based knowledge graph embedding model, named WGE, to capture entity- and relation-focused graph structures. In particular, given a knowledge graph, WGE builds a single undirected entity-focused graph that views entities as nodes. In addition, WGE constructs another single undirected graph from relation-focused constraints, which views both entities and relations as nodes. WGE then uses a GNN-based architecture to better learn vector representations of entities and relations from these two single entity- and relation-focused graphs. WGE feeds the learned entity and relation representations into a weighted score function to return triple scores for knowledge graph completion. Experimental results show that WGE outperforms competitive baselines, obtaining state-of-the-art performance on seven benchmark datasets for knowledge graph completion.
    Comment: 13 pages; 3 tables; 3 figures
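
    As a rough illustration of the two views the abstract describes, the sketch below builds an entity-focused graph (entities as nodes) and a relation-focused graph (entities and relations as nodes) from a toy set of triples. This is one plausible reading of the abstract, not the authors' construction; the triples and edge rules are invented for illustration.

```python
# Toy construction of the two graph views described in the WGE abstract.
triples = [("paris", "capital_of", "france"),
           ("france", "located_in", "europe")]

# View 1: entity-focused graph -- connect head and tail entities directly.
entity_edges = {(h, t) for h, _, t in triples}

# View 2: relation-focused graph -- each triple contributes head--relation
# and relation--tail edges, so relations become nodes alongside entities.
relation_edges = set()
for h, r, t in triples:
    relation_edges.add((h, r))
    relation_edges.add((r, t))

print(entity_edges)    # {('paris', 'france'), ('france', 'europe')}
print(relation_edges)  # entity-relation edges for the second view
```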

    Forecasting daily patient outflow from a ward having no real-time clinical data

    OBJECTIVE: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. METHODS: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Whereas the ARIMA model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges to estimate the next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient-level features and 88 ward-level features. RESULTS: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forest model achieving a 22.7% improvement in mean absolute error across all days in the year 2014. CONCLUSIONS: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments.
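
    For a sense of how such a comparison can be set up, the sketch below fits random forest and support vector regressors on synthetic patient- and ward-level features and scores next-day discharge forecasts with mean absolute error. It is a hypothetical scikit-learn illustration, not the study's pipeline; the feature counts mirror the abstract but the data are random.

```python
# Hypothetical comparison of two of the regressors named in the abstract,
# scored with mean absolute error on a held-out final year of synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(1826, 108))     # e.g. 20 patient + 88 ward-level features per day
y = rng.poisson(lam=10, size=1826)   # next-day discharge counts (synthetic)

X_train, X_test = X[:1461], X[1461:]  # hold out the final year
y_train, y_test = y[:1461], y[1461:]

models = [("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
          ("support vector regression", SVR(kernel="rbf"))]
for name, model in models:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: MAE = {mean_absolute_error(y_test, pred):.2f}")
```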